Sunday, November 21, 2004

Hierarchical VMC (was Probabilistic Functions III)

A couple of comments following up on the last post.

I had wondered whether VMC could get results as good as DMC's. In theory, of course, the answer is "yes", provided there is enough variational freedom. That proviso is the hard part.


On to the second point.

I had proposed a variational wavefunction consisting of a sum of gaussians. The centers and widths were then the parameters to be optimized. The starting center points were to be generated by sampling from a more traditional wavefunction. Why bother? Why not start from a uniform distribution?


My speculative answer is that it would be horribly inefficient, and very likely to follow wrong paths (since the wavefunction is so "floppy"). Starting from a traditional wavefunction is thus a way of accelerating the convergence of the optimization procedure.
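A minimal sketch of this setup in Python. The harmonic-oscillator reference function, the Metropolis step size, and all the other numbers here are illustrative choices, not anything fixed by the method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "traditional" reference: the 1D harmonic-oscillator ground
# state, psi_0(x) ~ exp(-x^2/2).  Sample center points from |psi_0|^2
# with a simple Metropolis walk.
def sample_centers(n, nburn=500):
    x, pts = 0.0, []
    for i in range(n + nburn):
        trial = x + rng.normal(scale=0.5)
        # acceptance ratio |psi_0(trial)|^2 / |psi_0(x)|^2
        if rng.random() < np.exp(x**2 - trial**2):
            x = trial
        if i >= nburn:
            pts.append(x)
    return np.array(pts)

# The trial function itself: a sum of gaussians whose centers and common
# width are the variational parameters.
def psi(x, centers, width):
    return np.exp(-(x - centers[:, None])**2 / (2 * width**2)).sum(axis=0)

centers = sample_centers(50)   # a "warm start" instead of uniform centers
```

Starting `centers` this way concentrates the gaussians where the wavefunction is already known to be large, which is the acceleration being proposed.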


The idea seems to be one of hierarchical VMC, or of a hierarchical wavefunction that can be optimized in steps. The coarse parts are optimized first, and the finer details are added later - vaguely like the way a wavelet decomposition or a multigrid method handles spatial resolution.


It remains to be seen whether the sum-of-gaussians wavefunction survives contact with reality.

Thursday, November 18, 2004

Probabilistic Functions II

Last time I looked at the sequence of values obtained from sampling a function. I was trying to understand what would happen if I used a sum of gaussians, each centered at one of these values.


I was confused because I had a set of sample points, but also a function built from those points that could be considered a variational trial function in its own right.
The width of the gaussians can be taken as a variational parameter and optimized.


The idea for an MC scheme would be to generate a set of sample points from a trial function. Then the diffusion algorithm would be stepped forward in time to get a better approximation to the wavefunction.


But we can go further and consider all the center points as variational parameters as well. And we don't actually care about the diffusion part - it is only a device to relax the wavefunction to the ground state; we could use any optimization method instead. Now this form for the wavefunction is likely to have lots of linear dependencies, and some optimization methods may have problems with that.


One could simply do a random walk - try moving a point, evaluate the energy (with VMC), and accept or reject the move based on the energy difference. Could one get DMC-like quality from VMC?
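Here's a rough sketch of that random walk over the center points. To keep it short, the variational energy is evaluated by quadrature on a grid rather than by actual VMC sampling, and the stand-in problem is a 1D harmonic oscillator (exact ground-state energy 0.5); the greedy accept rule, the fixed width, and the step sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
V = 0.5 * x**2                       # harmonic oscillator, exact E_0 = 0.5

def psi(centers, width=0.7):
    return np.exp(-(x - centers[:, None])**2 / (2 * width**2)).sum(axis=0)

def energy(centers):
    """Variational energy <psi|H|psi>/<psi|psi> by crude quadrature."""
    p = psi(centers)
    d2 = np.gradient(np.gradient(p, dx), dx)     # finite-difference psi''
    return (p * (-0.5 * d2 + V * p)).sum() / (p * p).sum()

centers = rng.uniform(-2, 2, size=10)
E = energy(centers)
for _ in range(300):
    i = rng.integers(len(centers))               # move one center at a time
    trial = centers.copy()
    trial[i] += rng.normal(scale=0.2)
    Et = energy(trial)
    if Et < E:            # greedy; a thermal accept/reject would also work
        centers, E = trial, Et
```

The energy only ratchets downward here; whether this kind of walk can actually reach DMC-quality accuracy is exactly the open question.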


I can see a couple of possible problems so far. Getting the boundary conditions and cusp conditions correct may be difficult. (The "basis" functions don't need to be gaussians - just positive functions with a center point and a width.) Also, there may be so much variational freedom that convergence takes a long time.

Wednesday, November 17, 2004

Probabilistic Functions

A function, f(x). We can treat it as a black box - feed it x, and get back f(x). But what if we have a different sort of box - one with a big green button? Push the button and it spits out an 'x' value. The only way to ascertain f(x) is to keep track of the density of the values of 'x' that come out of the box - where f(x) is large, there will be lots of x's, and not so many where f(x) is small.
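A toy version of the green-button box, with a standard normal playing the role of the hidden f(x). The function, the sample count, and the binning are all just for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def push_button():
    # The box's internals are hidden from us; here it happens to draw
    # from a standard normal density.
    return rng.normal()

samples = np.array([push_button() for _ in range(20000)])

# The only handle on f(x): count where the x's pile up.
counts, edges = np.histogram(samples, bins=40, range=(-4, 4), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
peak = mids[np.argmax(counts)]      # lands near x = 0, where f is largest
```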


If we want to compute a normalized average over a weight function f(x), this output is exactly what we want. Now suppose we want to compute derivatives of f(x) (the energy in QMC calculations). In VMC, we can open the box and there is an analytic form for f(x) sitting there that we can differentiate. But in DMC, the situation may not be so simple - f(x) may be determined solely by the distribution of x's. (I was trying to figure out how to compute the energy when doing DMC without a trial function.)


One way to compute the derivative of f(x) is to keep a histogram of x's and take the derivative after the fact. However, this approach seems noise prone, and not likely to scale well to multiple dimensions.


Each sample is a delta function. Integrals over the function become integrals over a sum of delta functions centered at the sample points. What if we replaced each delta function with a gaussian (since a gaussian becomes a delta function in the limit of vanishing width)? Each point then represents a probability distribution with a finite width.
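This is essentially a kernel density estimate. A sketch, with illustrative numbers (standard-normal samples, a hand-picked width of 0.3):

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(size=2000)     # stand-in for the points from the box

def density(x, pts, width):
    """Sum of normalized gaussians, one per sample point - a smooth
    replacement for the sum of delta functions."""
    k = np.exp(-(x - pts[:, None])**2 / (2 * width**2))
    return k.sum(axis=0) / (len(pts) * width * np.sqrt(2 * np.pi))

x = np.linspace(-4, 4, 81)
f = density(x, samples, width=0.3)
# Unlike a histogram, this estimate is smooth and can be differentiated
# analytically with respect to x.
```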


The first attempt at representing the ground state wavefunction of an infinite square well failed miserably - I could get any value of the energy I wanted by varying the width of the gaussians. The problem is the boundary conditions - the wavefunction must vanish at the edges. So then I tried using image gaussians (outside of the square well) to force the appropriate BC's. That worked much better. So well, in fact, that the value of the energy is quite good even when the center points are drawn from a uniform distribution (although I suspect this is largely a feature of the simplicity of the potential).
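A sketch of the image-gaussian construction for a well on [0, L]. The number of centers, the width, and the use of only one layer of images on each side are all my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
L = 1.0                              # well on [0, L]; exact E_0 = pi^2/2
x = np.linspace(0, L, 401)
dx = x[1] - x[0]

def g(x, c, w):
    return np.exp(-(x - c)**2 / (2 * w**2))

def psi(centers, w):
    """Each gaussian gets negative images reflected through the walls,
    forcing psi to (very nearly) vanish at x = 0 and x = L."""
    p = np.zeros_like(x)
    for c in centers:
        p += g(x, c, w) - g(x, -c, w) - g(x, 2*L - c, w) + g(x, 2*L + c, w)
    return p

centers = rng.uniform(0.1, 0.9, size=20)     # uniform, as in the post
p = psi(centers, w=0.15)
d2 = np.gradient(np.gradient(p, dx), dx)
E = (p * (-0.5 * d2)).sum() / (p * p).sum()  # V = 0 inside the well
```

With these uniform centers, E comes out within a modest factor of the exact pi^2/2 ≈ 4.93. An exact construction needs an infinite set of images, but the leftover boundary terms are exponentially small when w is small compared to L.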


So what now?

With this sort of representation, I'm wondering if it's possible to use a forward Euler scheme to propagate the distribution forward in time (i.e., use a first-order approximation to the time derivative in the diffusion equation) - either symbolically (complicated formulas related to the original points) or by sampling (generating a new set of points).
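For reference, here is the grid version of that idea for the square well: forward Euler on the plain diffusion equation dpsi/dtau = (1/2) psi'', which relaxes any starting function to the ground state. Grid size, timestep, and step count are arbitrary, except that the timestep respects the usual explicit-scheme stability bound:

```python
import numpy as np

L, n = 1.0, 101
x = np.linspace(0, L, n)
dx = x[1] - x[0]
dt = 0.4 * dx**2          # explicit Euler is stable only for dt < dx^2 here

psi = np.ones(n)
psi[0] = psi[-1] = 0.0    # crude starting guess satisfying the BCs
for _ in range(4000):
    lap = np.zeros(n)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    psi = psi + dt * 0.5 * lap            # one forward-Euler step
    psi[0] = psi[-1] = 0.0
    psi /= np.sqrt((psi**2).sum() * dx)   # renormalize each step

# Excited modes decay fastest, so what survives is the ground state.
exact = np.sqrt(2 / L) * np.sin(np.pi * x / L)
```

The symbolic variant - applying one Euler step to the sum-of-gaussians form directly - would turn each gaussian into itself plus dt/2 times its second derivative, which is no longer a gaussian of the same form, so the sampled version may be the more practical route.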


Normally, one thinks of using orthogonal function expansions, since linearly dependent functions don't add anything new. This sum of gaussians is definitely not orthogonal (well, the gaussians become more nearly so at small widths), especially since the points are randomly distributed according to some distribution. Hmm. This contrast seems relevant somehow. The gaussians are all positive, while orthogonal functions usually have negative regions - I don't know whether that matters or not.

Thursday, November 11, 2004

Comments for Everyone. A New Blog.

I turned on comments for everyone, not just registered users.


Also, feeling the need to speak of things other than QMC, I've started a different blog for such topics. (I will continue posting here, I just wanted a place for non-QMC or non-MC related items).


[Edit - removed link to personal blog. Email me if you really want to know]

Atomic and Molecular orbital applets

Here's a course page that contains some atomic and molecular orbital applets. Look under the "Dry Laboratory Experiments", "D3" for atomic orbitals and "D4" for molecular orbitals.

Thursday, November 04, 2004

DMC timestep and equilibration

I've been reading Chebyshev and Fourier Spectral Methods by J. P. Boyd.


In Chapter 12, p. 227, he points out that if we are only interested in the steady state, errors in the decay rate don't matter.


Chapter 13 talks about operator splitting, which looks very similar to how Diffusion Monte Carlo gets the short time approximation to the propagator. Section 13.4 talks about consistency - basically timestep error in the steady state.


I had the realization that one could use large timesteps in DMC for the equilibration part, and smaller timesteps once equilibrium was reached. More drastic population control measures are probably needed as well if large timesteps are used (at least I always had trouble maintaining a stable population of walkers when the timestep was too large).
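A toy illustration of the two-phase schedule, using a bare-bones DMC (no trial function, no drift) for the harmonic oscillator. The population-control gain, the timesteps, and the step counts are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(5)
V = lambda x: 0.5 * x**2        # harmonic oscillator; exact E_0 = 0.5

def dmc(walkers, e_ref, dt, nsteps, target=500):
    history = []
    for _ in range(nsteps):
        walkers = walkers + rng.normal(scale=np.sqrt(dt), size=walkers.size)
        w = np.exp(-(V(walkers) - e_ref) * dt)
        # stochastic-rounding branching: floor(w + uniform) copies of each
        copies = (w + rng.random(walkers.size)).astype(int)
        walkers = np.repeat(walkers, copies)
        # crude population control: nudge e_ref to hold ~target walkers
        e_ref -= 0.05 * np.log(walkers.size / target)
        history.append(e_ref)
    return walkers, e_ref, np.mean(history[nsteps // 2:])

walkers = rng.uniform(-1, 1, 500)
# big timestep just to equilibrate the walker distribution quickly...
walkers, e_ref, _ = dmc(walkers, 0.5, dt=0.1, nsteps=200)
# ...then a small timestep once equilibrated, for the actual average
walkers, e_ref, E = dmc(walkers, e_ref, dt=0.01, nsteps=2000)
```

The equilibration phase covers the same imaginary time in a twentieth of the steps; the small-timestep phase then keeps most of the timestep bias out of the averaged energy.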


This particular idea isn't terribly profound, but I'm finding it very enlightening to read about other methods for solving differential equations. I also like the presentation level of this book - lots of practical, concrete considerations as well as descriptions of the consequences of various equations.

Wednesday, November 03, 2004

Qumax - QMC code

I signed up for Google Alerts a while back, and today I got a notice about Qumax, a Quantum Monte Carlo code.


I downloaded the code, but haven't tried compiling or running it yet.